    SCALABLE ALGORITHMS FOR HIGH DIMENSIONAL STRUCTURED DATA

    Emerging technologies and digital devices provide us with increasingly large volumes of data, in terms of both sample size and the number of features. To exploit the benefits of massive data sets, scalable statistical models and machine learning algorithms are becoming increasingly important across research disciplines. For robust and accurate prediction, prior knowledge about dependency structures within the data needs to be formulated appropriately in these models. At the same time, the scalability and computational complexity of existing algorithms may not meet the demands of analyzing massive high-dimensional data. This dissertation presents several novel methods to scale up sparse learning models for massive data sets. We first present our novel safe active incremental feature (SAIF) selection algorithm for LASSO (least absolute shrinkage and selection operator), together with a time-complexity analysis that shows its advantages over existing state-of-the-art methods. As SAIF targets general convex loss functions, it can potentially be extended to many learning models and big-data applications, and we show how support vector machines (SVM) can be scaled up based on the idea of SAIF. Secondly, we propose screening methods for the generalized LASSO (GL), which specifically considers the dependency structure among features. We also propose a scalable feature selection method for non-parametric, non-linear models based on sparse structures and kernel methods. Theoretical analysis and experimental results in this dissertation show that model complexity can be significantly reduced under the sparsity and structure assumptions.
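    The SAIF algorithm itself is not spelled out in this abstract, so the sketch below instead shows the classical static SAFE screening rule for the LASSO, purely to illustrate what "safe" feature screening means: features that provably have zero coefficients at the optimum are discarded before the solver runs. The data X, response y, and regularization level lam are hypothetical inputs, not from the dissertation.

```python
import numpy as np

def safe_screen_lasso(X, y, lam):
    """Classical static SAFE rule for the LASSO, shown only to illustrate
    safely discarding features before solving
        min_w 0.5 * ||y - X w||^2 + lam * ||w||_1.
    This is NOT the SAIF method from the dissertation, which instead screens
    incrementally over an active feature set."""
    corr = np.abs(X.T @ y)                  # |x_j^T y| for every feature j
    lam_max = corr.max()                    # smallest lam giving an all-zero solution
    col_norms = np.linalg.norm(X, axis=0)
    y_norm = np.linalg.norm(y)
    # Feature j is provably inactive (w_j = 0 at the optimum) whenever
    # |x_j^T y| < lam - ||x_j|| * ||y|| * (lam_max - lam) / lam_max.
    threshold = lam - col_norms * y_norm * (lam_max - lam) / lam_max
    return corr >= threshold                # mask of features that survive screening

# Illustrative run on random data.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 500))
y = rng.standard_normal(100)
keep = safe_screen_lasso(X, y, lam=0.8 * np.abs(X.T @ y).max())
print(f"{keep.sum()} of {X.shape[1]} features kept after screening")
```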

    Free-hand sketch synthesis with deformable stroke models

    We present a generative model which can automatically summarize the stroke composition of free-hand sketches of a given category. When our model is fit to a collection of sketches with similar poses, it discovers and learns the structure and appearance of a set of coherent parts, with each part represented by a group of strokes. It represents both the consistent (topology) and the diverse (structure and appearance variations) aspects of each sketch category. Key to the success of our model are important insights learned from a comprehensive study performed on human stroke data. By fitting this model to images, we are able to synthesize visually similar and pleasing free-hand sketches.

    Human-Object-Object-Interaction Affordance

    This paper presents a novel human-object-object (HOO) interaction affordance learning approach that models the interaction motions between paired objects in a human-object-object way and uses the motion models to improve object recognition reliability. The innate interaction-affordance knowledge of the paired objects is modeled from a set of labeled training data that contains relative motions of the paired objects, human actions, and object labels. The learned knowledge of the pair relationship is represented with a Bayesian network, and the trained network is used to improve the recognition reliability of the objects.
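    As a rough illustration of how a network over object labels and observed relative motion can sharpen recognition, the snippet below applies Bayes' rule with a hand-made conditional probability table; the labels, motion category, and probabilities are hypothetical and are not taken from the paper.

```python
import numpy as np

# Hypothetical states: candidate labels for object A, with object B assumed
# to be recognized as a cup and a pour-like relative motion observed.
obj_a_states = ["kettle", "book"]

# Prior belief over object A's label, P(ObjA).
prior_a = np.array([0.5, 0.5])

# Conditional probability table P(motion = "pour-like" | ObjA, ObjB = "cup"),
# i.e. how likely the observed relative motion is for each candidate label.
p_pour_given_a = np.array([0.90, 0.05])

# Bayes' rule: update the belief over object A's label given the motion.
posterior_a = prior_a * p_pour_given_a
posterior_a /= posterior_a.sum()

for label, p in zip(obj_a_states, posterior_a):
    print(f"P(ObjA = {label} | motion = pour-like, ObjB = cup) = {p:.3f}")
```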

    Best Subset Selection with Efficient Primal-Dual Algorithm

    Best subset selection is considered the 'gold standard' for many sparse learning problems. A variety of optimization techniques have been proposed to attack this non-convex and NP-hard problem. In this paper, we investigate the dual forms of a family of ℓ0-regularized problems. An efficient primal-dual method is developed based on the primal and dual problem structures. By leveraging the dual range estimation along with an incremental strategy, our algorithm reduces redundant computation and improves the solutions of best subset selection. Theoretical analysis and experiments on synthetic and real-world datasets validate the efficiency and statistical properties of the proposed solutions.
    Comment: arXiv admin note: text overlap with arXiv:1703.00119 by other authors.
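    The paper's primal-dual algorithm is not reproduced here; as a point of reference, the sketch below solves the underlying cardinality-constrained least-squares problem with plain iterative hard thresholding (IHT), a standard baseline for ℓ0-regularized regression. The synthetic data and sparsity level k are purely illustrative.

```python
import numpy as np

def iterative_hard_thresholding(X, y, k, step=None, n_iter=200):
    """Baseline solver for the cardinality-constrained least-squares problem
        min_w 0.5 * ||y - X w||^2   s.t.  ||w||_0 <= k.
    Standard IHT, shown only to make the l0-regularized problem concrete;
    this is not the primal-dual algorithm proposed in the paper."""
    n, p = X.shape
    if step is None:
        # Step size 1 / L, where L is the largest eigenvalue of X^T X.
        step = 1.0 / np.linalg.norm(X, 2) ** 2
    w = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)              # gradient of the least-squares loss
        w = w - step * grad                   # gradient step
        support = np.argsort(np.abs(w))[-k:]  # keep the k largest entries
        mask = np.zeros(p, dtype=bool)
        mask[support] = True
        w[~mask] = 0.0                        # hard-threshold the rest to zero
    return w

# Illustrative run on synthetic data with a 5-sparse ground truth.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))
w_true = np.zeros(50)
w_true[:5] = [3.0, -2.0, 1.5, 2.5, -1.0]
y = X @ w_true + 0.1 * rng.standard_normal(200)
w_hat = iterative_hard_thresholding(X, y, k=5)
print("recovered support:", np.flatnonzero(w_hat))
```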

    Object–Object Interaction Affordance Learning

    This paper presents a novel object–object affordance learning approach that enables intelligent robots to learn the interactive functionalities of objects from human demonstrations in everyday environments. Instead of considering a single object, we model the interactive motions between paired objects in a human–object–object way. The innate interaction-affordance knowledge of the paired objects is learned from a labeled training dataset that contains a set of relative motions of the paired objects, human actions, and object labels. The learned knowledge is represented with a Bayesian network, and the network can be used to improve the recognition reliability of both objects and human actions and to generate a proper manipulation motion for a robot once a pair of objects is recognized. This paper also presents an image-based visual servoing approach that uses the learned motion features of the affordance in interaction as the control goals to control a robot to perform manipulation tasks.
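    The abstract does not detail the servoing controller, so the following sketch shows only the classical image-based visual servoing law v = -gain * L⁺ (s - s*) for point features, where the goal feature positions s* could be supplied by a learned motion model as described above; the coordinates, depths, and gain are hypothetical.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix of a normalised image point (x, y)
    at depth Z, relating image-feature velocity to camera velocity."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Classical image-based visual servoing law  v = -gain * L^+ (s - s*),
    returning a 6-DoF camera velocity twist (vx, vy, vz, wx, wy, wz)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Hypothetical example: drive two tracked points toward their goal positions.
current = [(0.10, 0.05), (-0.08, 0.12)]   # current normalised image coordinates
goal    = [(0.00, 0.00), (-0.10, 0.10)]   # goal coordinates (e.g. from a learned model)
depths  = [0.6, 0.6]                      # estimated point depths in metres
print("camera velocity twist:", ibvs_velocity(current, goal, depths))
```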

    Learning Grasping Force from Demonstration

    This paper presents a novel force learning framework to learn fingertip force for a grasping and manipulation process from a human teacher with a force imaging approach. A demonstration station is designed to measure fingertip force without attaching force sensors to fingertips or objects, so the approach can be used with objects of daily living. A Gaussian Mixture Model (GMM) based machine learning approach is applied to the fingertip force and position data to obtain the motion and force model, and a force and motion trajectory is then generated from the learning result with Gaussian Mixture Regression (GMR). The force and motion trajectory is applied to a robotic arm and hand to carry out a grasping and manipulation task. An experiment was designed and carried out to verify the learning framework by teaching a Fanuc robotic arm and a BarrettHand a pick-and-place task through demonstration. Experimental results show that the robot applied proper motions and forces in the pick-and-place task from the learned model.
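    A minimal sketch of the generic GMM/GMR pipeline described here: fit a joint Gaussian mixture over time and fingertip force, then condition on time to regress a smooth force trajectory. The synthetic demonstration data, the single force dimension, and the number of mixture components are assumptions for illustration, not the paper's actual setup.

```python
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

# Hypothetical demonstration data: columns are (time, fingertip force).
# The paper's joint model also covers position; a single output dimension
# is used here only to keep the sketch short.
t = np.linspace(0.0, 1.0, 200)
force = 2.0 * np.sin(np.pi * t) + 0.05 * np.random.default_rng(0).standard_normal(t.size)
data = np.column_stack([t, force])

# 1) Fit a joint Gaussian Mixture Model over (time, force).
gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0).fit(data)

def gmr(gmm, t_query):
    """Gaussian Mixture Regression: condition the joint GMM on time to get
    the expected force at each query time."""
    out = np.zeros_like(t_query)
    # Responsibilities h_k(t) of each component for the input (time) dimension.
    h = np.array([w * norm.pdf(t_query, m[0], np.sqrt(c[0, 0]))
                  for w, m, c in zip(gmm.weights_, gmm.means_, gmm.covariances_)])
    h /= h.sum(axis=0, keepdims=True)
    for k, (m, c) in enumerate(zip(gmm.means_, gmm.covariances_)):
        # Conditional mean of force given time for component k.
        cond_mean = m[1] + c[1, 0] / c[0, 0] * (t_query - m[0])
        out += h[k] * cond_mean
    return out

# 2) Generate a smooth force trajectory that could be replayed on a robot hand.
t_new = np.linspace(0.0, 1.0, 50)
force_traj = gmr(gmm, t_new)
print(force_traj[:5])
```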

    Safe Feature Screening for Generalized LASSO
